- Algorithms are presented that efficiently shape the parity bits of systematic irregular repeat-accumulate (IRA) low-density parity-check (LDPC) codes by following the sequential encoding order of the accumulator. Simulations over additive white Gaussian noise (AWGN) channels with on-off keying show a gain of up to 0.9 dB over uniform signaling.
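Since the accumulator's sequential encoding order is central to how the shaping works, here is a minimal sketch of systematic IRA encoding, where each parity bit is the previous parity bit XORed with a few message bits. The connection matrix `A` and all sizes are illustrative, not taken from the paper:

```python
import numpy as np

def ira_encode(msg, A):
    """Systematic IRA encoding: the accumulator makes each parity bit the
    running XOR, p[i] = p[i-1] XOR (A[i] . msg mod 2)."""
    parity = np.zeros(A.shape[0], dtype=np.uint8)
    acc = 0
    for i in range(A.shape[0]):          # sequential accumulator order
        acc ^= int(A[i] @ msg) & 1
        parity[i] = acc
    return np.concatenate([msg, parity])

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(4, 4), dtype=np.uint8)   # toy connections
msg = np.array([1, 0, 1, 1], dtype=np.uint8)
print(ira_encode(msg, A))   # [message | accumulated parity]
```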
- Convolutional codes are widely used in many applications. The encoders can be implemented with a simple circuit. Decoding is often accomplished by the Viterbi algorithm or the maximum a-posteriori decoder of Bahl et al. These algorithms are sequential in nature, requiring a decoding time proportional to the message length. For low-latency applications, this decoding time can be problematic. This paper introduces a low-latency decoder for tail-biting convolutional codes (TBCCs) that processes multiple trellis stages in parallel. The new decoder is designed for hardware with parallel processing capabilities. The overall decoding latency is proportional to the logarithm of the message length. The new decoding architecture is modified into a list decoder, and the list decoding performance can be enhanced by exploiting linearity to expand the search space. Certain modifications to standard TBCCs are supported by the new architecture and improve frame error rate performance.
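The paper's architecture isn't reproduced here, but a standard way to obtain log-depth trellis processing is to represent each trellis stage as an S x S branch-metric matrix and fold adjacent stages together with min-plus (tropical) matrix products in a balanced tree. A rough sketch of that idea, with arbitrary toy metrics on a 4-state trellis:

```python
import numpy as np

def minplus(A, B):
    # tropical product: (A @ B)[i, j] = min over k of A[i, k] + B[k, j]
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def tree_combine(stages):
    """Fold per-stage S x S branch-metric matrices pairwise, so the
    sequential depth is O(log N) rather than O(N); use np.inf for
    branches that do not exist in a stage."""
    while len(stages) > 1:
        stages = [stages[i] if i + 1 == len(stages)
                  else minplus(stages[i], stages[i + 1])
                  for i in range(0, len(stages), 2)]
    return stages[0]

rng = np.random.default_rng(1)
stages = [rng.random((4, 4)) for _ in range(8)]   # toy 4-state, 8 stages
total = tree_combine(stages)                      # best i -> j path metrics
print(np.min(np.diag(total)))  # best closed (tail-biting) path metric
```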
- Posterior matching uses variable-length encoding of the message controlled by noiseless feedback of the received symbols to achieve high rates for short average blocklengths. Traditionally, the feedback of a received symbol occurs before the next symbol is transmitted. The transmitter optimizes the next symbol transmission with full knowledge of every past received symbol. To move posterior matching closer to practical communication, this paper seeks to constrain how often feedback can be sent back to the transmitter. We focus on reducing the frequency of the feedback while still maintaining the high rates that posterior matching achieves with feedback after every symbol. As it turns out, the frequency of the feedback can be reduced significantly with no noticeable reduction in rate.
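As a rough, hypothetical illustration of constrained feedback (not the paper's scheme), the toy transmitter below refreshes its partition of the message posterior only at feedback instants, once every m symbols, while the receiver performs a per-symbol Bayes update over a binary symmetric channel. All parameters are illustrative:

```python
import numpy as np

def reduced_feedback_pm(theta, M=64, p=0.05, m=4, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    post = np.full(M, 1.0 / M)   # receiver posterior; transmitter mirrors it
    n = 0
    while post.max() < 1 - eps:
        # partition refreshed only at feedback instants: messages binned
        # into 2^m groups of roughly equal posterior mass
        order = np.argsort(-post)
        cum_ex = np.concatenate(([0.0], np.cumsum(post[order])[:-1]))
        groups = np.minimum((cum_ex * 2 ** m).astype(int), 2 ** m - 1)
        group_of = np.empty(M, dtype=int)
        group_of[order] = groups
        for j in range(m):                  # m channel uses per feedback round
            b = (group_of[theta] >> j) & 1  # bit j of the message's group index
            y = b ^ int(rng.random() < p)   # BSC(p)
            bit_j = (group_of >> j) & 1
            post = post * np.where(bit_j == y, 1 - p, p)   # per-symbol Bayes
            post = post / post.sum()
            n += 1
    return int(np.argmax(post)), n

print(reduced_feedback_pm(theta=37))        # (decoded message, channel uses)
```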
- This paper presents new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size M = 2^k. We provide bounds for two cases: the first considers VLSF codes with possibly infinite decoding times and zero error probability; the second limits the maximum (finite) number of decoding times and specifies a maximum tolerable probability of error. Both new achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first k message bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for the BEC by Devassy et al. in 2016. We show that the backoff from capacity reduces to zero as the erasure probability decreases, thus giving a negative answer to the open question Devassy et al. posed on whether the 23.4% backoff to capacity at k = 3 is fundamental to all BECs. For VLSF codes with finite decoding times, numerical evaluations show that systematic transmission followed by random linear fountain coding achieves higher rates than random linear coding.
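The construction described, systematic bits followed by random linear fountain parities and a rank decoder, is straightforward to simulate over the BEC. A minimal sketch that estimates the average decoding time (illustrative parameters, not the paper's evaluation):

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    A = A.copy()
    rank = 0
    for c in range(A.shape[1]):
        piv = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def vlsf_bec_stop_time(k=16, eps=0.3, seed=0):
    """Send k systematic bits, then random fountain parities, over a
    BEC(eps); the rank decoder stops once the unerased rows span GF(2)^k."""
    rng = np.random.default_rng(seed)
    rows, n = [], 0
    while not rows or gf2_rank(np.array(rows, dtype=np.uint8)) < k:
        row = (np.eye(k, dtype=np.uint8)[n] if n < k        # systematic part
               else rng.integers(0, 2, k, dtype=np.uint8))  # fountain parity
        n += 1
        if rng.random() >= eps:          # symbol survives the erasure channel
            rows.append(row)
    return n                             # decoding time for this realization

print(np.mean([vlsf_bec_stop_time(seed=s) for s in range(200)]))
```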
- For a two-variance model of the Flash read channel that degrades as a function of the number of program/erase cycles, this paper demonstrates that selecting write voltages to maximize the minimum page mutual information (MI) can increase device lifetime. In multi-level cell (MLC) Flash memory, one of four voltage levels is written to each cell, according to the values of the most-significant bit (MSB) page and the least-significant bit (LSB) page. In our model, each voltage level is then distorted by signal-dependent additive Gaussian noise that approximates the Flash read channel. When performing an initial read of a page in MLC Flash, one (for LSB) or two (for MSB) bits of information are read for each cell of the page. If LDPC decoding fails after the initial read, then an enhanced-precision read is performed. This paper shows that jointly designing write voltage levels and read thresholds to maximize the minimum MI between a page and its associated initial or enhanced-precision read bits can improve LDPC decoding performance.
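A sketch of the underlying computation may help: with an assumed Gray map in which the LSB page is resolved by a single middle read threshold, the MI between the written LSB and its hard read bit follows from Gaussian CDFs. The levels, the two noise variances, and the threshold sweep below are illustrative, and a joint design would also search over the write levels:

```python
import numpy as np
from math import erf, log2, sqrt

def Phi(x):                      # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def lsb_page_mi(levels, sigmas, thresh):
    """MI between the written LSB and the hard read bit 1{y < thresh},
    assuming a map in which the LSB page is resolved by one middle
    threshold (LSB = 1 for the two lowest levels)."""
    lsb = (1, 1, 0, 0)
    joint = np.zeros((2, 2))     # joint[written bit][read bit]
    for v, s, b in zip(levels, sigmas, lsb):
        q = Phi((thresh - v) / s)          # P(y < thresh | level)
        joint[b, 1] += 0.25 * q
        joint[b, 0] += 0.25 * (1 - q)
    pb, pr = joint.sum(axis=1), joint.sum(axis=0)
    return sum(joint[b, r] * log2(joint[b, r] / (pb[b] * pr[r]))
               for b in range(2) for r in range(2) if joint[b, r] > 0)

# two-variance toy model: larger spread on the erased (lowest) level
levels, sigmas = (0.0, 1.0, 2.0, 3.0), (0.35, 0.25, 0.25, 0.25)
best = max((lsb_page_mi(levels, sigmas, t), round(t, 2))
           for t in np.linspace(0.5, 2.5, 201))
print(best)   # (max MI, best read threshold)
```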
- With a sufficiently large list size, the serial list Viterbi algorithm (S-LVA) provides maximum-likelihood (ML) decoding of a concatenated convolutional code (CC) and an expurgating linear function (ELF), which is similar in function to a cyclic redundancy check (CRC) but does not require the code to be cyclic. However, S-LVA with a large list size requires considerable complexity. This paper exploits linearity to reduce decoding complexity for tail-biting CCs (TBCCs) concatenated with ELFs.
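An ELF check is polynomial divisibility over GF(2), so the S-LVA outer loop reduces to scanning the likelihood-ordered list for the first candidate the ELF divides. A minimal sketch with an illustrative degree-3 ELF:

```python
def gf2_mod(a, m):
    # remainder of a(x) / m(x) over GF(2); bit i holds the x^i coefficient
    while a and a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def gf2_mul(a, b):
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def elf_list_scan(ranked_candidates, elf):
    """S-LVA outer loop: return the first candidate, in descending
    likelihood order, whose polynomial the ELF divides; when the scan
    succeeds, that candidate is the ML codeword."""
    for cand in ranked_candidates:
        if gf2_mod(cand, elf) == 0:
            return cand
    return None   # list exhausted: declare an erasure or grow the list

elf = 0b1011                              # x^3 + x + 1, an illustrative ELF
good = gf2_mul(0b110101, elf)             # divisible by the ELF by construction
print(elf_list_scan([good ^ 0b100, good], elf) == good)   # True
```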
- Convolutional codes have been widely studied and used in many systems. As the number of memory elements increases, frame error rate (FER) improves but computational complexity increases exponentially. Recently, decoders that achieve reduced average complexity through list decoding have been demonstrated when the convolutional encoder polynomials share a common factor that can be understood as a CRC or, more generally, an expurgating linear function (ELF). However, classical convolutional codes avoid such common factors because they result in a catastrophic encoder. This paper provides a way to access the complexity reduction possible with list decoding even when the convolutional encoder polynomials do not share a common factor. Decomposing the original code into component encoders that fully exclude some polynomials can allow an ELF to be factored from each component. Dual list decoding of the component encoders can often find the ML codeword. Including a fallback to regular Viterbi decoding yields excellent FER performance while requiring less average complexity than always performing Viterbi decoding on the original trellis. A best-effort dual list decoder that avoids the Viterbi fallback has performance similar to that of the ML decoder. Component encoders that have a shared polynomial allow for even greater complexity reduction.
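Whether encoder polynomials share a common factor is a polynomial GCD question over GF(2). A small sketch, using the classic (133, 171) octal pair (which is coprime, hence non-catastrophic) and an artificial pair with a shared factor:

```python
def gf2_mod(a, m):
    # remainder of a(x) / m(x) over GF(2); bit i holds the x^i coefficient
    while a and a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def gf2_mul(a, b):
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def gf2_gcd(a, b):
    # Euclid's algorithm in GF(2)[x]
    while b:
        a, b = b, gf2_mod(a, b)
    return a

g0, g1 = 0o133, 0o171          # classic 64-state pair: coprime, so no factor
print(gf2_gcd(g0, g1))         # prints 1: no ELF comes for free
h0, h1 = gf2_mul(g0, 0b111), gf2_mul(g1, 0b111)
print(bin(gf2_gcd(h0, h1)))    # 0b111: a shared factor usable as an ELF
```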
- Recently, neural networks have improved MinSum message-passing decoders for low-density parity-check (LDPC) codes by multiplying or adding weights to the messages, where the weights are determined by a neural network. The neural network complexity to determine distinct weights for each edge is high, often limiting the application to relatively short LDPC codes. Furthermore, storing separate weights for every edge and every iteration can be a burden for hardware implementations. To reduce neural network complexity and storage requirements, this paper proposes a family of weight-sharing schemes that use the same weight for edges that have the same check node degree and/or variable node degree. Our simulation results show that node-degree-based weight sharing can deliver the same performance as using distinct weights for each edge. This paper also combines these degree-specific neural weights with a reconstruction-computation-quantization (RCQ) decoder to produce a weighted RCQ (W-RCQ) decoder. The W-RCQ decoder with node-degree-based weight sharing has a reduced hardware requirement compared with the original RCQ decoder. As an additional contribution, this paper identifies and resolves a gradient explosion issue that can arise when training neural LDPC decoders.
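For concreteness, here is a sketch of one weighted MinSum check-node update with a single shared weight selected by check-node degree; the weight values are placeholders standing in for trained ones:

```python
import numpy as np

def weighted_minsum_cn(msgs, weight):
    """One check-node update of weighted MinSum: the message to each edge
    is the weighted minimum magnitude over the other edges, carrying the
    product of their signs."""
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    signs[signs == 0] = 1
    mags = np.abs(msgs)
    out = np.empty_like(mags)
    for i in range(len(msgs)):
        out[i] = (weight * np.prod(np.delete(signs, i))
                  * np.delete(mags, i).min())
    return out

# degree-based weight sharing: one weight per check-node degree, instead
# of one per edge per iteration (placeholder values, not trained weights)
weights_by_degree = {3: 0.85, 4: 0.80, 6: 0.75}
msgs = [1.2, -0.4, 0.9, -2.0]
print(weighted_minsum_cn(msgs, weights_by_degree[len(msgs)]))
```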
- Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code, as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures.
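The control flow can be illustrated at toy scale: rank all paths of a single trellis by metric and scan down the ranked list until the first closed (tail-biting) path, whose rank is the list size a progressively growing list decoder would have needed. The 4-state code, blocklength, and noise level below are illustrative, and the brute-force ranking stands in for actual parallel list Viterbi decoding:

```python
import numpy as np
from itertools import product

def step(state, u):
    # toy 4-state (2,1,2) code, generators 7 and 5 octal; state = (u[t-1], u[t-2])
    u1, u2 = state >> 1, state & 1
    return (u << 1) | u1, (u ^ u1 ^ u2, u ^ u2)

def run(start, bits, y):
    # correlation metric and end state of one path on the single trellis
    s, metric, i = start, 0.0, 0
    for u in bits:
        s, out = step(s, u)
        for o in out:
            metric += y[i] * (1 - 2 * o)
            i += 1
    return metric, s

rng = np.random.default_rng(2)
N = 8
msg = tuple(int(b) for b in rng.integers(0, 2, N))
start = (msg[-1] << 1) | msg[-2]       # tail-biting: the path closes on itself
x, s = [], start
for u in msg:
    s, out = step(s, u)
    x += [1 - 2 * o for o in out]      # BPSK modulation
y = np.array(x, dtype=float) + rng.normal(0.0, 0.7, len(x))

# rank every path by metric, then scan for the first closed path; its rank
# is the list size a progressively growing list decoder would have needed
paths = sorted(((run(s0, bits, y), s0, bits)
                for s0 in range(4) for bits in product((0, 1), repeat=N)),
               key=lambda t: -t[0][0])
rank = next(i for i, ((m, se), s0, bits) in enumerate(paths) if se == s0)
decoded = paths[rank][2]
print("list size needed:", rank + 1, "| decoded correctly:", decoded == msg)
```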
- We extend earlier work on the design of convolutional code-specific CRC codes to Q-ary alphabets, with an eye toward Q-ary orthogonal signaling. Starting with distance-spectrum-optimal, zero-terminated, Q-ary convolutional codes, we design Q-ary CRC codes so that the CRC/convolutional concatenation is distance-spectrum optimal. The Q-ary code symbols are mapped to a Q-ary orthogonal signal set and sent over an AWGN channel with noncoherent reception. We focus on Q = 4, rate-1/2 convolutional codes in our designs. The random coding union (RCU) bound and normal approximation are used in earlier works as performance benchmarks for distance-spectrum-optimal convolutional codes. We derive a saddlepoint approximation of the RCU bound for the coded noncoherent signaling channel, as well as a normal approximation for this channel, and compare the performance of our codes to these limits. Our best design is within 0.6 dB of the RCU bound at a frame error rate of 10^-4.
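Q-ary CRC encoding and checking is polynomial division over GF(Q). A minimal sketch for Q = 4, with an illustrative generator (not one of the paper's designed CRCs):

```python
# GF(4) tables: addition is XOR; 2 and 3 denote w and w^2 = w + 1
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
INV = [0, 1, 3, 2]    # multiplicative inverses (index 0 unused)

def gf4_polymod(p, g):
    # long division in GF(4)[x]; coefficient lists, highest degree first
    p = list(p)
    for i in range(len(p) - len(g) + 1):
        if p[i]:
            f = MUL[p[i]][INV[g[0]]]
            for j, gj in enumerate(g):
                p[i + j] ^= MUL[f][gj]   # addition in GF(4) is XOR
    return p[-(len(g) - 1):]

g = [1, 2, 1]                            # illustrative generator over GF(4)
msg = [3, 0, 1, 2]
crc = gf4_polymod(msg + [0] * (len(g) - 1), g)
print(gf4_polymod(msg + crc, g))         # [0, 0]: the concatenation checks out
```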